Diffusion models


Diffusion models are a class of generative models that learn a data distribution by gradually corrupting training data with noise and training a network to reverse that corruption. At sampling time, they start from a simple base distribution (typically a standard Gaussian) and iteratively denoise it into a data sample. They have been used in a wide range of applications, including image generation, video generation, text generation, and density estimation.
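The forward (noising) half of this process can be written in closed form. The sketch below is a minimal, illustrative example of a DDPM-style forward process with an assumed linear noise schedule; the step count `T` and schedule endpoints are common defaults, not values taken from any of the papers listed here. A trained model would learn to reverse these steps.

```python
import numpy as np

T = 1000                            # number of diffusion steps (assumed)
betas = np.linspace(1e-4, 0.02, T)  # linear noise schedule (assumed)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)     # cumulative product, \bar{alpha}_t

def q_sample(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps

rng = np.random.default_rng(0)
x0 = 2.0 + 0.1 * rng.standard_normal(10_000)  # toy data: N(2, 0.1^2)
xT = q_sample(x0, T - 1, rng)

# After T steps the data is statistically close to the base distribution N(0, 1),
# which is what makes reverse-time denoising from pure noise possible.
print(f"mean={xT.mean():.2f}, std={xT.std():.2f}")
```

Because `alpha_bars[T-1]` is nearly zero, the original signal is almost entirely replaced by Gaussian noise at the final step, regardless of the data distribution it started from.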

PROBE: Diagnosing Residual Concept Capacity in Erased Text-to-Video Diffusion Models

Mar 23, 2026

Confidence-Based Decoding is Provably Efficient for Diffusion Language Models

Mar 23, 2026

Adaptive Video Distillation: Mitigating Oversaturation and Temporal Collapse in Few-Step Generation

Mar 23, 2026

End-to-End Training for Unified Tokenization and Latent Denoising

Mar 23, 2026

ADaFuSE: Adaptive Diffusion-generated Image and Text Fusion for Interactive Text-to-Image Retrieval

Mar 23, 2026

Repurposing Geometric Foundation Models for Multi-view Diffusion

Mar 23, 2026

Autoregressive vs. Masked Diffusion Language Models: A Controlled Comparison

Mar 23, 2026

DA-VAE: Plug-in Latent Compression for Diffusion via Detail Alignment

Mar 23, 2026

WorldCache: Content-Aware Caching for Accelerated Video World Models

Mar 23, 2026

Climate Prompting: Generating the Madden-Julian Oscillation using Video Diffusion and Low-Dimensional Conditioning

Mar 23, 2026